The ever-increasing spread of artificial neural networks does not stop at ultra-low-power edge devices. However, these networks often have high computational demands and require specialized hardware accelerators to ensure the design meets power and performance constraints. Manually optimizing neural networks together with the corresponding hardware accelerators can be very challenging. This paper presents HANNAH (Hardware Accelerator and Neural Network seArcH), a framework for automated and combined hardware/software co-design of deep neural networks and hardware accelerators for resource- and power-constrained edge devices. The optimization approach uses an evolution-based search algorithm, a neural network template technique, and analytical KPI models of the configurable UltraTrail hardware accelerator template to find an optimized neural network and accelerator configuration. We demonstrate that HANNAH finds suitable neural networks with minimized power consumption and high accuracy for different audio classification tasks, such as single-class wake-word detection, multi-class keyword detection, and voice activity detection, outperforming related work.
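The co-search loop can be pictured with a short sketch. Below is a minimal, hypothetical illustration of an evolution-based search over a joint network/accelerator configuration space; `estimate_accuracy`, `estimate_power`, `LAMBDA`, and the search space are placeholders standing in for HANNAH's network templates and the analytical UltraTrail KPI models, not the framework's actual code.

```python
import random

LAMBDA = 0.1  # accuracy/power trade-off weight (hypothetical)

# Hypothetical joint search space: network hyperparameters plus one
# accelerator parameter (processing-element array width).
SPACE = {
    "channels": [16, 32, 64, 128],
    "layers": [4, 6, 8],
    "pe_array": [4, 8, 16],
}

def estimate_accuracy(cfg):
    # Placeholder: in HANNAH this would be a (short) training and
    # evaluation run of a network instantiated from a template.
    return random.random()

def estimate_power(cfg):
    # Placeholder: in HANNAH this comes from analytical KPI models of the
    # configurable UltraTrail accelerator template.
    return cfg["channels"] * cfg["layers"] / (1000 * cfg["pe_array"])

def fitness(cfg):
    # Scalarized accuracy/power trade-off (one of several possible choices).
    return estimate_accuracy(cfg) - LAMBDA * estimate_power(cfg)

def mutate(cfg):
    # Re-sample one randomly chosen dimension of the configuration.
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def evolve(pop_size=20, generations=50):
    population = [{k: random.choice(v) for k, v in SPACE.items()}
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the top quarter, refill by mutation.
        survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)
```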
The deployment of neural networks on heterogeneous SoCs coupled with custom accelerators is a challenging task because of the lack of end-to-end software tools provided for these systems. Moreover, the low-level schedules and mapping strategies already provided by accelerator developers for typical tensor operations are not necessarily the best ones for each particular use case. This is why frameworks that automatically test the performance of the generated code on a specific hardware configuration are of special interest. In this work, the integration between the code-generation framework TVM and the systolic-array-based accelerator Gemmini is presented. A generic schedule for offloading the GEneral Matrix Multiply (GEMM) tensor operation onto Gemmini is detailed, and its suitability is tested by running the AutoTVM tuning process on it. Our generated code achieves a peak throughput of 46 giga-operations per second (GOPs) under a 100 MHz clock on a Xilinx ZCU102 FPGA, outperforming previous work. Furthermore, the code generated by this integration surpasses the default hand-tuned schedules provided by the Gemmini developers on real-world workloads.
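To illustrate the kind of tunable schedule AutoTVM explores, here is a minimal sketch of a templated int8 GEMM in TVM's tensor-expression language with the tiling factors left to the tuner. It targets plain LLVM for portability; the actual Gemmini offload described above requires the custom code-generation path of this integration, so the template name, tile knobs, and target below are assumptions for illustration only.

```python
import tvm
from tvm import te, autotvm

@autotvm.template("gemm_offload_sketch")
def gemm(N, K, M):
    A = te.placeholder((N, K), name="A", dtype="int8")
    B = te.placeholder((K, M), name="B", dtype="int8")
    k = te.reduce_axis((0, K), name="k")
    # int8 inputs accumulated in int32, as systolic arrays typically do.
    C = te.compute(
        (N, M),
        lambda i, j: te.sum(A[i, k].astype("int32") * B[k, j].astype("int32"), axis=k),
        name="C",
    )
    s = te.create_schedule(C.op)
    cfg = autotvm.get_config()
    i, j = s[C].op.axis
    # Tile the output; AutoTVM searches over the tiling factors, which would
    # correspond to the accelerator's array dimensions in a real offload.
    cfg.define_split("tile_i", i, num_outputs=2)
    cfg.define_split("tile_j", j, num_outputs=2)
    io, ii = cfg["tile_i"].apply(s, C, i)
    jo, ji = cfg["tile_j"].apply(s, C, j)
    s[C].reorder(io, jo, ii, ji)
    return s, [A, B, C]

# Tuning driver: build and measure candidate schedules, logging the best.
task = autotvm.task.create("gemm_offload_sketch", args=(128, 128, 128), target="llvm")
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5),
)
tuner = autotvm.tuner.XGBTuner(task)
tuner.tune(
    n_trial=64,
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("gemm.log")],
)
```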
Machine learning (ML) is now widely available to the research community at large, which has facilitated a proliferation of novel and compelling applications of these emerging mathematical techniques across a wide range of disciplines. In this paper, we focus on one particular case study: the field of paleoanthropology, which seeks to understand the evolution of the human species based on biological and cultural evidence. As we show, the ease of use of ML algorithms, combined with a lack of expertise in their proper application within the anthropological research community, has led to fundamental misapplications that appear throughout the literature. The resulting unreliable results not only undermine efforts to legitimately incorporate ML into anthropological research, but also risk distorting our understanding of the human evolutionary and behavioral past. The aim of this paper is to provide a brief introduction to some of the ways ML has been applied in paleoanthropology; we also include a survey of basic ML algorithms for readers who are not fully familiar with this still actively developing field. We discuss a series of mistakes, errors, and violations of correct ML protocols that appear disconcertingly often in the accumulating body of anthropological literature. These errors include the use of outdated algorithms and practices; inappropriate train/test splits, sample composition, and textual interpretation; and a lack of transparency due to the absence of data and code sharing and the resulting limits on independent replication. We assert that expanding samples, sharing data and code, re-evaluating peer-review practices, and, most importantly, building interdisciplinary teams that include ML experts are all necessary for the advancement of future research incorporating ML within anthropology.
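As one concrete illustration of the methodological points above, the sketch below shows a leakage-free evaluation protocol on hypothetical morphometric data: preprocessing is fit inside each cross-validation fold, and the folds are stratified, two practices whose absence underlies many of the train/test-split errors discussed. All names and data here are illustrative, not drawn from any cited study.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: morphometric measurements (X) with taxon labels (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = rng.integers(0, 2, size=120)

# Keep preprocessing inside the pipeline so the scaler is fit only on each
# training fold, never on held-out data (avoids data leakage).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Stratified k-fold preserves class proportions per fold, which matters
# for the small, often imbalanced samples common in paleoanthropology.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```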
We demonstrate that self-learning techniques such as entropy minimization and pseudo-labeling are simple and effective at improving the performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique, or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge of or access to the original training data or scheme, is robust to hyperparameter choices, is straightforward to implement, and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error), and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods, and propose a new classification dataset (ImageNet-D) that is challenging even with adaptation.
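To make the recipe concrete, here is a minimal sketch of entropy-minimization self-learning in PyTorch, following the common TENT-style instantiation: only the affine parameters of the normalization layers are adapted, using unlabeled test batches. This is an assumed, generic setup for illustration, not necessarily the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

def entropy(logits):
    # Mean Shannon entropy of the softmax predictions over a batch.
    log_p = logits.log_softmax(dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def adapt(model, test_loader, steps=1, lr=1e-3):
    # Freeze everything, then unfreeze only the affine parameters of the
    # batch-norm layers (TENT-style; one common instantiation of
    # entropy-minimization self-learning).
    model.train()
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)) and m.affine:
            m.weight.requires_grad_(True)
            m.bias.requires_grad_(True)
            params += [m.weight, m.bias]
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for x, _ in test_loader:  # unlabeled test batches; labels are unused
        for _ in range(steps):
            loss = entropy(model(x))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```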